Ignoring Data in Court: An Idealized Decision-Theoretic Analysis

Author

  • Peter D. Grünwald
Abstract

We give a decision-theoretic analysis of a central issue regarding statistical evidence in court: are there circumstances under which it is reasonable to ignore the part of the data that gave rise to suspicion in the first place? We heuristically show that under a minimax/robust Bayesian analysis, this part of the data should in fact be treated differently from any additional data one might have. In some situations, even completely ignoring this part of the data can be a minimax optimal strategy.

Lucia de B. is a Dutch nurse who worked in the Juliana children’s hospital in The Hague from 1999 to 2001. She happened to be on duty whenever a patient in her ward suddenly died or suddenly needed to be reanimated. The hospital’s management became suspicious and notified the police. The police then gathered data about Lucia’s shifts in the Red Cross hospital, a hospital where she had worked a few years previously. Thus, these data were only taken into account later, after the investigation against Lucia had begun. In 2004, the Court of Appeals found her guilty of 7 murders and 3 murder attempts. Statistics played a crucial role in the verdict: a statistician calculated that what happened “could not have been a coincidence”. Despite warnings by the statistician, the court did not need much further evidence to change “not a coincidence” into “murder.” The statistical analysis itself was flawed in several respects. The question is: can we do better?

When presenting the case at the UCL evidence seminar (March 20th, 2007), I claimed that a purely Bayesian approach is problematic here. From a Bayesian point of view, we would like to determine posterior probabilities that Lucia is innocent or guilty. This requires a prior probability that Lucia is guilty. The problem is that a broad range of priors may be deemed “reasonable,” and these may differ by several orders of magnitude. Therefore, I argued, one should either adopt a “robust Bayesian” approach, working with a set of priors rather than a single one, or one should adopt a Neyman-Pearson (NP) style hypothesis test; in the latter case, to avoid selection bias, one should ignore the first data set and use only the second one. The latter proposal generated a lot of resistance. I have now studied both suggestions in more detail, focusing on the question of whether it can be sensible to ignore, or at least treat differently, the first data set. Following a suggestion by C. Manski, I have taken a decision-theoretic approach. The result is the present note.
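
To make the prior-sensitivity point concrete, here is a minimal sketch (mine, not the paper’s; the likelihood ratio LR and the set of candidate priors are illustrative assumptions) that applies Bayes’ rule in odds form and shows how the posterior probability of guilt varies as the prior ranges over several orders of magnitude:

    def posterior_guilt(prior, likelihood_ratio):
        # Bayes' rule in odds form: posterior odds = likelihood ratio * prior odds.
        prior_odds = prior / (1.0 - prior)
        posterior_odds = likelihood_ratio * prior_odds
        return posterior_odds / (1.0 + posterior_odds)

    LR = 1e4  # hypothetical likelihood ratio in favour of guilt (an assumption)

    # "Reasonable" priors that differ by several orders of magnitude.
    for prior in (1e-7, 1e-6, 1e-5, 1e-4, 1e-3):
        print(f"prior {prior:.0e} -> posterior {posterior_guilt(prior, LR):.4f}")

    # A robust Bayesian reports the whole range rather than a single number.
    posteriors = [posterior_guilt(p, LR) for p in (1e-7, 1e-3)]
    print(f"posterior ranges from {min(posteriors):.4f} to {max(posteriors):.4f}")

With these illustrative numbers the posterior spans roughly 0.001 to 0.91, so the conclusion is driven almost entirely by the choice of prior rather than by the evidence; this is exactly the sensitivity that motivates the robust-Bayesian and minimax analyses in the note.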

Publication date: 2009